
    Quantum Mechanics of a Rotating Billiard

    Integrability of a square billiard is spontaneously broken as it rotates about one of its corners. The system becomes quasi-integrable, with the invariant tori breaking up as a function of the parameter λ = 2E/ω², where E is the energy of the particle inside the billiard and ω is the angular frequency of rotation of the billiard. We study the system classically and quantum mechanically with a view to establishing a correspondence between the two descriptions. The classical phase space, viewed in a Poincaré surface of section, shows a transition from regular to chaotic motion as the parameter λ is decreased. In the quantum counterpart, the spectral statistics show a transition from a Poisson to a Wigner distribution as the system turns chaotic with decreasing λ. The wavefunction statistics, however, show a breakdown of time-reversal symmetry as λ decreases.
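
    As a hedged illustration (not code from the paper): the Poisson-to-Wigner transition mentioned above is conventionally diagnosed from the nearest-neighbor level-spacing distribution of the (unfolded) energy spectrum. A minimal sketch in Python:

        import numpy as np

        def spacings(levels):
            """Nearest-neighbor level spacings, normalized to unit mean."""
            s = np.diff(np.sort(np.asarray(levels)))
            return s / s.mean()

        # Limiting distributions the spacing histogram is compared against:
        def poisson(s):                # integrable/regular limit (large lambda)
            return np.exp(-s)

        def wigner(s):                 # chaotic limit (small lambda), Wigner surmise
            return (np.pi / 2) * s * np.exp(-np.pi * s**2 / 4)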

    A high-performance magnesium lattice clock: stability and accuracy analysis

    Optical lattice clocks have reached uncertainties in the 10^{-18} regime, well surpassing the primary microwave frequency standard. Such performance levels have allowed for applications ranging from geodesy to fundamental physics. The performance of state-of-the-art optical lattice clocks is strongly influenced by black-body radiation (BBR) induced frequency shifts. Magnesium is one of the optical lattice clock candidate elements with very low sensitivity to BBR, which makes it an interesting candidate as an optical frequency reference. Optical lattice clocks rely on high-Q optical transitions, where Doppler and recoil shifts are suppressed by trapping the atoms in the Lamb-Dicke regime. For magnesium, due to its low atomic mass, the tunneling-induced line broadening is significant. This has been a bottleneck in reducing the instability of the magnesium lattice clock. However, the large tunneling rate for magnesium atoms in the optical lattice also allows us to study these lattice effects using optical spectroscopy. The lattice AC Stark shift is one of the important contributions to the uncertainty budget of an optical lattice clock. To achieve clock uncertainties in the 10^{-18} regime, even the contributions from multipolar polarizabilities and hyperpolarizability become significant. Therefore, operational magic frequencies have been identified in strontium and ytterbium lattice clocks, where the light-shift dependence on intensity is zero to lowest order.

    In this thesis, an extensive model has been developed to understand the influence of tunneling in a one-dimensional optical lattice on the clock transition lineshape. This model is used to simulate the spectroscopy results previously observed in our experiment, which show strong lineshape asymmetry as the lattice wavelength is detuned from the magic condition. The strong influence of transverse states in generating these asymmetries was highlighted by numerical simulations.

    To improve the performance of our magnesium lattice clock beyond the last frequency measurements, lattice system upgrades were carried out within the scope of this thesis. These allowed the tunneling-induced line broadening to be suppressed to the sub-Hz regime for the first time for magnesium, and the 1S0-3P0 clock transition to be resolved with a linewidth of 7(3) Hz. The resulting line Q of 9(3) x 10^{13} helped reduce the clock instability in a self-comparison measurement to 7.2^{+7.7}_{-1.8} x 10^{-17} in 3000 seconds of averaging time. The improved clock instability also helped estimate various systematic shifts with much improved uncertainties. The probe AC Stark shift and Zeeman shift uncertainties were reduced to the mid-10^{-17} regime, while the cold-collision shift was characterized with an uncertainty of 1.4 x 10^{-16}. With the aim of similarly reducing the lattice AC Stark shift uncertainty, the influence of higher-order shifts was characterized for magnesium for the first time. The hyperpolarizability coefficient was estimated to be 197(53) µHz/(kW cm^{-2})^2. These measurements show that the lattice shift can be characterized with an uncertainty of 6.5 x 10^{-16}, paving the way for a future frequency measurement with more than an order of magnitude lower uncertainty.
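
    As a schematic illustration (generic notation, not the thesis's full tunneling model): near the magic condition the lattice light shift is commonly expanded in the trap intensity I, with the hyperpolarizability coefficient quoted above entering at second order,

        \Delta\nu_{\mathrm{lattice}}(I) \approx \frac{\partial \Delta\alpha}{\partial \nu}\,(\nu_L - \nu_{\mathrm{magic}})\, I + \beta I^{2},

    where Δα is the differential polarizability of the clock states, ν_L the lattice frequency, and β the hyperpolarizability coefficient (estimated above as 197(53) µHz/(kW cm^{-2})^2 for magnesium). An operational magic frequency is a choice of ν_L at which the total slope ∂Δν/∂I vanishes at the operating intensity, making the shift insensitive to intensity to lowest order.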

    Modeling Data Reuse in Deep Neural Networks by Taking Data-Types into Cognizance

    In recent years, researchers have focused on reducing the model size and the number of computations (measured as "multiply-accumulate" or MAC operations) of DNNs. The energy consumption of a DNN depends on both the number of MAC operations and the energy efficiency of each MAC operation. The former can be estimated at design time; however, the latter depends on the intricate data reuse patterns and the underlying hardware architecture, and is hence challenging to estimate at design time. This work shows that the conventional metric for estimating data reuse, viz. arithmetic intensity, does not always correctly capture the degree of data reuse in DNNs, since it gives equal importance to all data types. We propose a novel model, termed "data-type aware weighted arithmetic intensity" (DI), which accounts for the unequal importance of different data types in DNNs. We evaluate our model on 25 state-of-the-art DNNs on two GPUs. We show that our model accurately captures data reuse for all possible data reuse patterns, across different types of convolution and different types of layers, and that it is a better indicator of the energy efficiency of DNNs. We also show its generality using the central limit theorem.

    Comment: Accepted at IEEE Transactions on Computers (Special Issue on Machine-Learning Architectures and Accelerators) 202
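
    A minimal sketch of the idea (the exact weighting scheme is defined in the paper; the per-type weights below are placeholders):

        def arithmetic_intensity(macs, weight_bytes, act_bytes, out_bytes):
            """Conventional AI: all data types count equally."""
            return macs / (weight_bytes + act_bytes + out_bytes)

        def weighted_arithmetic_intensity(macs, weight_bytes, act_bytes,
                                          out_bytes, w_w, w_a, w_o):
            """Data-type aware variant: weights w_w, w_a, w_o capture the
            unequal reuse/cost of filter weights, input activations, and
            outputs (placeholder weights, not the paper's definition)."""
            return macs / (w_w * weight_bytes + w_a * act_bytes + w_o * out_bytes)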

    DeepReShape: Redesigning Neural Networks for Efficient Private Inference

    Prior work on Private Inference (PI), i.e. inference performed directly on encrypted input, has focused on minimizing a network's ReLUs, on the assumption that ReLUs, not FLOPs, dominate PI latency. Recent work has shown that FLOPs for PI can no longer be ignored and carry high latency penalties. In this paper, we develop DeepReShape, a network redesign technique that tailors architectures to PI constraints, optimizing for both ReLUs and FLOPs for the first time. The key insight is that a strategic allocation of channels, such that the network's ReLUs are aligned in their criticality order, simultaneously optimizes ReLU and FLOPs efficiency. DeepReShape automates network development with an efficient process, and we call the generated networks HybReNets. We evaluate DeepReShape using standard PI benchmarks and demonstrate a 2.1% accuracy gain with a 5.2× runtime improvement at iso-ReLU on CIFAR-100, and an 8.7× runtime improvement at iso-accuracy on TinyImageNet. Furthermore, we demystify the input network selection in prior ReLU optimizations and shed light on the key network attributes enabling PI efficiency.

    Comment: 37 pages, 23 Figures, and 17 Tables
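
    A hedged illustration of why channel allocation governs the ReLU/FLOPs balance (a generic layer-cost model, not DeepReShape's actual procedure):

        def conv_layer_costs(c_in, c_out, h, w, k=3):
            """Per-layer costs relevant to PI: ReLUs scale with the output
            activation volume, FLOPs with c_in * c_out as well."""
            relus = c_out * h * w                      # one ReLU per output activation
            flops = 2 * c_in * c_out * h * w * k * k   # each MAC counted as 2 FLOPs
            return relus, flops

    Under this model, doubling the channel counts while halving the spatial size leaves FLOPs roughly unchanged but halves the ReLU count, which is one way a strategic reallocation of channels can re-order where a network "spends" its ReLUs.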

    Innovative techniques for deployment of microservices in cloud-edge environment

    PhD Thesis
    The evolution of microservice architecture allows complex applications to be structured into independent modular components (microservices), making them easier to develop and manage. Complemented with containers, microservices can be deployed across any cloud and edge environment. Although containerized microservices are becoming popular in industry, relatively little research is available, especially in the areas of performance characterization and optimized deployment of microservices. Depending on the application type (e.g. web, streaming) and the provided functionalities (e.g. filtering, encryption/decryption, storage), microservices are heterogeneous, with specific functional and Quality of Service (QoS) requirements. Further, cloud and edge environments are themselves complex, with a huge number of cloud providers and edge devices along with their host configurations. Due to these complexities, finding a suitable deployment solution for microservices becomes challenging. To handle the deployment of microservices in cloud and edge environments, this thesis presents multilateral research towards microservice performance characterization, run-time evaluation and system orchestration. Considering a variety of applications, numerous algorithms and policies have been proposed, implemented and prototyped. The main contributions of this thesis are given below:
    - Characterizes the performance of containerized microservices considering various types of interference in the cloud environment.
    - Proposes and models an orchestrator, SDBO, for benchmarking simple web-application microservices in a multi-cloud environment. SDBO is validated using an e-commerce test web-application.
    - Proposes and models an advanced orchestrator, GeoBench, for the deployment of complex web-application microservices in a multi-cloud environment. GeoBench is validated using a geo-distributed test web-application.
    - Proposes and models a run-time deployment framework for distributed streaming application microservices in a hybrid cloud-edge environment. The model is validated using a real-world healthcare analytics use case for human activity recognition.
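
    Purely as an illustrative sketch of the deployment-matching problem described above (not the thesis's SDBO/GeoBench algorithms; all names and fields are hypothetical):

        from dataclasses import dataclass

        @dataclass
        class Host:                    # hypothetical cloud/edge host descriptor
            name: str
            cpu: float                 # available vCPUs
            latency_ms: float          # network latency to end users

        @dataclass
        class Microservice:            # hypothetical QoS requirements
            name: str
            cpu: float
            max_latency_ms: float

        def place(ms, hosts):
            """Pick the lowest-latency host satisfying the QoS constraints."""
            ok = [h for h in hosts
                  if h.cpu >= ms.cpu and h.latency_ms <= ms.max_latency_ms]
            return min(ok, key=lambda h: h.latency_ms) if ok else None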

    E2GC: Energy-efficient Group Convolution in Deep Neural Networks

    The number of groups (g) in group convolution (GConv) is selected to boost the predictive performance of deep neural networks (DNNs) in a compute- and parameter-efficient manner. However, we show that a naive selection of g in GConv creates an imbalance between the computational complexity and the degree of data reuse, which leads to suboptimal energy efficiency in DNNs. We devise an optimum group size model, which enables a balance between computational cost and data movement cost and thus optimizes the energy efficiency of DNNs. Based on the insights from this model, we propose an "energy-efficient group convolution" (E2GC) module where, unlike previous implementations of GConv, the group size (G) remains constant. Further, to demonstrate the efficacy of the E2GC module, we incorporate this module in the design of MobileNet-V1 and ResNeXt-50 and perform experiments on two GPUs, P100 and P4000. We show that, at comparable computational complexity, DNNs with constant group size (E2GC) are more energy-efficient than DNNs with a fixed number of groups (FgGC). For example, on the P100 GPU, the energy efficiency of MobileNet-V1 and ResNeXt-50 is increased by 10.8% and 4.73%, respectively, when E2GC modules substitute the FgGC modules in both DNNs. Furthermore, through extensive experimentation with the ImageNet-1K and Food-101 image classification datasets, we show that the E2GC module enables a trade-off between the generalization ability and the representational power of a DNN. Thus, the predictive performance of DNNs can be optimized by selecting an appropriate G. The code and trained models are available at https://github.com/iithcandle/E2GC-release.

    Comment: Accepted as a conference paper in 2020 33rd International Conference on VLSI Design and 2020 19th International Conference on Embedded Systems (VLSID)
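
    A minimal sketch of the cost imbalance the abstract describes (a generic grouped-convolution cost model; the constant G below is illustrative, not the paper's value):

        def gconv_costs(c_in, c_out, h, w, k, groups):
            """MACs and weight traffic of a grouped convolution: with g groups,
            each output channel sees only c_in / g input channels."""
            macs = c_in * c_out * h * w * k * k // groups
            weight_bytes = 4 * c_in * c_out * k * k // groups   # fp32 weights
            return macs, weight_bytes

        # Fixed number of groups (FgGC): groups is constant, so the group *size*
        # c_in / groups grows with layer width. Constant group size (E2GC):
        def e2gc_groups(c_in, G=16):    # G = illustrative constant group size
            return max(1, c_in // G)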